Structural Vulnerability of Power Grids to Disasters: Bounds, Adversarial Attacks and Reinforcement
Authors
Abstract
Natural disasters such as hurricanes, floods, or earthquakes can damage power grid devices and create cascading blackouts and islands. The nature of failure propagation and the extent of damage depend on the structural features of the grid, which differ from those of random networks. This paper analyzes the structural vulnerability of real power grids to impending disasters and presents intuitive graphical metrics to quantify the extent of damage. Two improved graph-eigenvalue-based bounds on grid vulnerability are developed and demonstrated through simulations of failure propagation on IEEE test cases and real networks. Finally, this paper studies adversarial attacks aimed at weakening the grid's structural resilience and presents two approximate schemes to determine the critical transmission lines that may be attacked to minimize grid resilience. The framework can also be used to design protection schemes to secure the grid against such adversarial attacks. Simulations on power networks are used to compare the performance of the attack schemes in reducing grid resilience.
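The abstract's exact eigenvalue bounds and attack schemes are not reproduced here, but the following minimal Python sketch illustrates the general idea: model the grid as an undirected graph, proxy its structural resilience by the algebraic connectivity (the second-smallest Laplacian eigenvalue), and greedily pick the transmission lines whose removal reduces that value the most. The synthetic graph, the choice of metric, and the greedy strategy are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: proxy grid resilience by the algebraic connectivity
# (second-smallest Laplacian eigenvalue) and greedily remove the k edges
# (transmission lines) whose loss reduces it the most. Illustrative only.
import networkx as nx


def resilience(graph):
    # Use 0 for a disconnected graph; otherwise the Fiedler value.
    if not nx.is_connected(graph):
        return 0.0
    return nx.algebraic_connectivity(graph)


def greedy_line_attack(graph, k):
    """Greedily select k lines whose removal minimizes algebraic connectivity."""
    g = graph.copy()
    attacked = []
    for _ in range(k):
        best_edge, best_value = None, float("inf")
        for edge in list(g.edges()):
            g.remove_edge(*edge)
            value = resilience(g)
            g.add_edge(*edge)
            if value < best_value:
                best_edge, best_value = edge, value
        g.remove_edge(*best_edge)
        attacked.append(best_edge)
    return attacked, resilience(g)


if __name__ == "__main__":
    # Small synthetic 3-regular graph stands in for an IEEE test case.
    grid = nx.random_regular_graph(d=3, n=20, seed=1)
    lines, residual = greedy_line_attack(grid, k=3)
    print("attacked lines:", lines)
    print("residual connectivity:", round(residual, 4))
```

The same loop can be inverted into a crude protection heuristic: lines that the greedy attacker selects first are natural candidates for hardening.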
Similar Resources
Models and Framework for Adversarial Attacks on Complex Adaptive Systems
We introduce the paradigm of adversarial attacks that target the dynamics of Complex Adaptive Systems (CAS). To facilitate the analysis of such attacks, we present multiple approaches to the modeling of CAS as dynamical, data-driven, and game-theoretic systems, and develop quantitative definitions of attack, vulnerability, and resilience in the context of CAS security. Furthermore, we propose a ...
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we ...
Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger
Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations. In this paper, we investigate the robustness and resilience of deep RL to training-time and test-time attacks. Through experimental results, we demonstrate that under non-contiguous training-time attacks, Deep Q-Network (DQN) agents can recover a...
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
Deep reinforcement learning has shown promising results in learning control policies for complex sequential decision-making tasks. However, these neural network-based policies are known to be vulnerable to adversarial examples. This vulnerability poses a potentially serious threat to safety-critical systems such as autonomous vehicles. In this paper, we propose a defense mechanism to defend rei...
The structure of electrical networks: a graph theory based analysis
We study the vulnerability of electrical networks through structural analysis from a graph theory point of view. We measure and compare several important structural properties of different electrical networks, including a real power grid and several synthetic grids, as well as other infrastructural networks. The properties we consider include the minimum dominating set size, the degree distribu...
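As a rough illustration of the structural measurements described in that abstract, the short sketch below computes a degree distribution, an approximate dominating set, average clustering, and diameter on a synthetic graph standing in for a power grid. The synthetic graph and the greedy dominating-set routine (not the true minimum) are illustrative assumptions, not the cited paper's data or code.

```python
# Sketch of graph-theoretic structural measurements with networkx.
# The dominating set below is a greedy approximation, not the minimum.
import networkx as nx

grid = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=2)

degree_sequence = sorted(d for _, d in grid.degree())
dominating = nx.dominating_set(grid)      # greedy approximate dominating set
clustering = nx.average_clustering(grid)
diameter = nx.diameter(grid) if nx.is_connected(grid) else float("inf")

print("degree distribution:", degree_sequence)
print("approx. dominating set size:", len(dominating))
print("average clustering:", round(clustering, 3))
print("diameter:", diameter)
```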
Journal: CoRR
Volume: abs/1509.07449
Pages: -
Publication date: 2015